QM II - W1

$\newcommand{\dede}[2]{\frac{\partial #1}{\partial #2} } \newcommand{\dd}[2]{\frac{d #1}{d #2}} \newcommand{\divby}[1]{\frac{1}{#1} } \newcommand{\typing}[3][\Gamma]{#1 \vdash #2 : #3} \newcommand{\xyz}[0]{(x,y,z)} \newcommand{\xyzt}[0]{(x,y,z,t)} \newcommand{\hams}[0]{-\frac{\hbar^2}{2m}(\dede{^2}{x^2} + \dede{^2}{y^2} + \dede{^2}{z^2}) + V\xyz} \newcommand{\hamt}[0]{-\frac{\hbar^2}{2m}(\dede{^2}{x^2} + \dede{^2}{y^2} + \dede{^2}{z^2}) + V\xyzt} \newcommand{\ham}[0]{-\frac{\hbar^2}{2m}(\dede{^2}{x^2}) + V(x)} \newcommand{\konko}[2]{^{#1}\space_{#2}} \newcommand{\kokon}[2]{_{#1}\space^{#2}} $

# Content

$\newcommand{\L}{\mathcal L}$

## Intro

- Raphael Zumbrunn
- rzumbrunn@ethz.ch
- https://zura.ch
- Master Interdisciplinary Sciences -> Quantum Sensing

Your names, studies & favorite baked good

### What this class is not about:

- Going over all the derivations of the previous exercises
- Solving the exercises in class
- Re-deriving complicated results

### What this class is about:

- Building intuition not provided in the lecture
- Motivating applications of the content
- Giving alternative explanations
- General tips and tricks for solving the exercises
- Concept questions
- Asking questions (your job)

I recommend checking out the other exercise classes, so you can find the one that actually helps you understand the course.

## Motivation

- Intro to many-body physics & second quantisation
- Precursor to QFT
- Particle physics
- Solid-state physics & quasi-particles
- Basis for statistical physics

## Recap: Analytical Mechanics

While Newtonian mechanics is often seen as the most intuitive formulation of mechanics, it quickly gets very complicated for larger systems. In General Mechanics you learned two additional formalisms:

- Lagrangian mechanics and
- Hamiltonian mechanics

Both revolve around the concept of energy rather than forces.

Quick side note:
The move from force to energy (a vectorial quantity to a scalar one) is the same move we make in electrodynamics when we go from *fields* to *potentials*.

### Lagrangian Mechanics

Lagrangian mechanics revolves around the Lagrangian
$$\L = T-V$$
and the integral thereof, the action
$$S[q] = \int_{t_{0}}^{t_{1}} \L (q(t), \dot q(t), t)\, dt$$
Any evolution of a system will extremalize the action; we call this the "principle of least action".

#### Optical analogy

In optics we have a similar principle, Fermat's principle, which states that light takes the path which extremalizes the optical path length.

##### Small Analysis II recap: the length of a path $\gamma$

We parametrize a path by $\gamma: [a,b] \to \mathbb R^{n}$. We can interpret
$$|\dot \gamma| = \sqrt{\dot \gamma_{1}^{2} + \ldots + \dot \gamma_{n}^{2}}$$
as the velocity along the path. The integral over the velocity has to be the distance:
$$d = \int_{a}^{b} |\dot \gamma|\, dt$$
Note that this is independent of our parametrisation (i.e. of how fast we move along the path)!

##### Fermat's principle

The optical path length is the distance $d$ times the refractive index $n$ of the medium, or, in the more general case, the integral of $n$ along the path:
$$OPL = \int_{\gamma} n\, ds$$
I will now write Fermat's principle in a slightly strange way:
$$S[q] = \int_{\gamma} n(\vec q) \left| \frac{d{\vec q}}{dt}\right| dt = \int_{\gamma} n(\vec q)\, |\dot{\vec q}|\, dt$$
Here I reparametrized the path in $t$ instead of $x$. By analogy we can thus read off the "optical Lagrangian"
$$\L = n(\vec q)\,|\dot{\vec q}|$$
This motivates the "lawnmower analogy" for Snell's law, because it uses friction (a force that scales with $\dot q$) to motivate how light has to refract. This analogy is not just a fancy trick, it is actually a pretty physically accurate analog model!

#### Optical mechanics

What Lagrange thus did is bring the reasoning we know from optics in contact with the reasoning we know from mechanics.
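The parametrisation independence of the path length can be checked numerically. Here is a minimal sketch (my own toy example, not from the lecture): two different parametrisations of the unit circle, whose length must come out as $2\pi$ either way.

```python
import numpy as np

def path_length(gamma, a, b, n=200_000):
    """d = integral of |gamma'(t)| dt over [a, b], via finite differences + trapezoid rule."""
    t = np.linspace(a, b, n)
    dt = t[1] - t[0]
    pts = gamma(t)                        # shape (2, n)
    vel = np.gradient(pts, dt, axis=1)    # numerical velocity along the path
    speed = np.linalg.norm(vel, axis=0)
    return np.sum((speed[:-1] + speed[1:]) / 2) * dt

# The unit circle, traversed at two different "speeds"
uniform = lambda t: np.array([np.cos(t), np.sin(t)])          # t in [0, 2*pi]
sped_up = lambda t: np.array([np.cos(t**2), np.sin(t**2)])    # t in [0, sqrt(2*pi)]

d1 = path_length(uniform, 0.0, 2 * np.pi)
d2 = path_length(sped_up, 0.0, np.sqrt(2 * np.pi))
print(d1, d2)  # both close to 2*pi = 6.2832
```

The integrand $|\dot\gamma|$ rescales exactly so as to compensate for how fast we run along the curve, which is why both answers agree.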
#### Advantages of the Lagrangian

- Scalar, not vectorial
- Can treat interactions between particles quite well
- Can directly include constraints & boundary conditions (during the minimization process)
- Easily follows transformations via reparametrisation

#### Concept question:

Given the principle of least action, which minimizes $T$ and maximizes $V$, why does the harmonic oscillator oscillate (which means $T$ will never stay zero), and not just go to the lowest point and stay there? The conceptual problem lies in:

A) Energy is always conserved in Lagrangian mechanics
B) We confuse $\L$ with $S$
C) The harmonic oscillator will actually end up in the ground state
D) None of the above

__spoiler__
The correct answer is B. While the oscillatory motion does not _instantaneously_ minimize the _Lagrangian_, it minimizes the integral _over_ the Lagrangian, i.e. the *action* $S$.

We can actually go through the case where the oscillator simply goes to the resting position:

- After it has reached the position it will stay there: $\L = 0 \implies S = 0$ from then on.
- But to reach that state it needs to stop at some point. We can use the Ansatz $q(t) = \exp(-t/\tau)$, for which $\dot q = -q/\tau$, giving
$$S[q] = \int \frac{1}{2}m \dot q^{2} - \frac{1}{2} kq^{2}\, dt = \int \frac{m}{2 \tau^{2}} q^{2} - \frac{1}{2} kq^{2}\, dt$$
This is minimal for $\tau \to \infty$, which means no damping: the system damping itself does not minimize the action. We can think of this as some sort of "inefficient trade-off" between $T$ and $V$.
__endspoiler__

### Hamiltonian mechanics

Where the Lagrangian formalism mostly builds on the idea of path reparametrisations, Hamiltonian mechanics builds on the idea of phase space, and flow within that phase space.
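As a small numerical illustration of phase-space flow, here is a sketch under my own toy assumptions (a harmonic oscillator with $m = k = 1$, integrated with a symplectic Euler step): the state $(q, p)$ flows along a closed ellipse, so the energy stays essentially constant.

```python
import numpy as np

# Hamiltonian H = p^2/(2m) + k q^2/2; Hamilton's equations read
#   qdot =  dH/dp = p/m
#   pdot = -dH/dq = -k q
m = k = 1.0
dt, steps = 1e-3, 20_000   # roughly three oscillation periods

q, p = 1.0, 0.0
energies = []
for _ in range(steps):
    # symplectic (semi-implicit) Euler: update p first, then q with the new p
    p -= k * q * dt
    q += p / m * dt
    energies.append(p**2 / (2 * m) + k * q**2 / 2)

# Closed orbit in phase space: the energy stays near its initial value 0.5
print(min(energies), max(energies))
```

The symplectic update is chosen because it (nearly) preserves the phase-space structure; a naive Euler step would make the orbit spiral outwards.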
The defining equations of Hamiltonian mechanics are:
$$\dede{H}{q} = -\dot p \qquad \dede{H}{p} = \dot q$$
And the Liouville equation for an observable $f$:
$$d_{t} f = \{f,H\} = \sum\limits_{\text{dimensions}} \partial_{q}f\, \partial_{p}H - \partial_{p}f\, \partial_{q}H$$
Historically the Hamiltonian was used as the starting point of quantisation, which leads to the so-called "canonical quantisation".

## Recap: Quantum Mechanics

- Schrödinger picture
- Heisenberg picture
- Interaction picture
- (Density matrix formalism)

I'm not sure if you saw all four formulations of quantum mechanics, but you'll probably have seen the first three. I won't bore you with the Schrödinger picture, but go straight to the Heisenberg and interaction pictures.

### Heisenberg picture

Instead of having evolving states and fixed observables, we put the entire time evolution onto the operators. We can derive it from the Schrödinger equation by using the time evolution operator:
$$d_{t}U = \frac{1}{i\hbar} H_{S}U$$
(this is exactly the same form the Schrödinger equation takes). We note that
$$A_{H}(t) = U(t)^{\dagger} A_{S}(t) U(t)$$
- Evolve $\psi_{0}$ forward to $t$
- Apply the observable at $t$
- Evolve back to $\psi_{0}$

Essentially we put the time evolution of the state onto the operator. From this prescription we can derive the time evolution of $A_{H}$ by taking the derivative. After applying the product rule and shuffling the terms we end up at
$$d_{t}A_{H}(t) = \frac{1}{i\hbar}[A_{H}(t), H_{H}(t)] + \left ( \partial_{t}A_{S} \right )_{H}$$
The last term captures the case where the operator already had a time dependence in the Schrödinger picture.

### Interaction picture

We can see the interaction picture as a partial move to the Heisenberg picture. We often use it when we have a system that interacts with a time-dependent Hamiltonian. The interaction picture allows us to go into the frame where we only have the time-dependent part.
That is, we move from
$$H_{S}(t) = H_{0,S} + H_{1,S}(t)$$
to an effective evolution under just the interaction part:
$$i\hbar\, d_{t} \ket {\psi_{I}(t)} = H_{1,I}\ket{\psi_{I}(t)}$$
where $H_{1,I}$ is the interaction Hamiltonian with the time evolution under the base Hamiltonian $H_{0,S}$ already included,
$$H_{1,I}(t) = e^{\frac{i t H_{0,S}}{\hbar}} H_{1,S}\, e^{-\frac{i t H_{0,S}}{\hbar}}$$
and $\ket{\psi_{I}(t)}$ is the state with the evolution under the base Hamiltonian already taken out,
$$\ket{\psi_{I}(t)} = e^{\frac{i t H_{0,S}}{\hbar}}\ket{\psi_{S}(t)}$$
We can think of the interaction picture as something like changing the frame of reference. In classical physics we can, for example, solve problems in free fall this way.

#### Classical analogy

Instead of keeping the constant uniform acceleration inside the state (the position), we can solve the problem in a system without gravity, but then note that all our observables need to be corrected by the time evolution under said acceleration. That means that if I want to solve for a collision in free fall, I can solve the collision without gravity and then just move the center of mass according to the evolution under gravity.

#### Concept question:

Which of these are not valid interaction pictures?

A) Moving into the center-of-mass frame
B) Moving into a rotating frame
C) Lorentz boosting
D) Moving into a frame where the time-dependent Hamiltonian is stationary

__spoiler__
C mixes space and time; time has a special role in QM -> not allowed.
D is valid, but not always useful.
__endspoiler__

### Overview

The main driver behind the two new perspectives is the so-called propagator $U(t_{1},t_{0})$: it takes a state at $t_{0}$ to a state at $t_{1}$. We used it in the above two pictures to move parts of our time evolution into the observables. We now want to understand the form of $U$ a bit better.
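The picture changes above can be sanity-checked in a small Hilbert space. A sketch under assumed toy parameters of my own choosing ($\hbar = 1$, a random $4\times4$ Hamiltonian standing in for a real system, and $H_{0}$ taken as the diagonal part of $H$ purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
hbar = 1.0

def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

def evolve(H, t):
    """U(t) = exp(-i H t / hbar), via eigendecomposition (H Hermitian)."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t / hbar)) @ V.conj().T

dim, t = 4, 0.7
H = rand_herm(dim)                        # full (time-independent) Hamiltonian
A = rand_herm(dim)                        # some observable
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)

# Heisenberg picture: <psi(t)| A |psi(t)> = <psi0| U^dagger A U |psi0>
U = evolve(H, t)
exp_S = (U @ psi0).conj() @ A @ (U @ psi0)
exp_H = psi0.conj() @ (U.conj().T @ A @ U) @ psi0
print(np.allclose(exp_S, exp_H))          # both pictures give the same physics

# Interaction picture: split H = H0 + H1, check i*hbar d_t |psi_I> = H_1I |psi_I>
H0 = np.diag(np.diag(H).real)             # toy choice of "free" part
H1 = H - H0

def psi_I(s):
    return evolve(H0, -s) @ evolve(H, s) @ psi0   # e^{+i H0 s/hbar} |psi_S(s)>

ds = 1e-5
lhs = 1j * hbar * (psi_I(t + ds) - psi_I(t - ds)) / (2 * ds)
H1I = evolve(H0, -t) @ H1 @ evolve(H0, t)
rhs = H1I @ psi_I(t)
print(np.max(np.abs(lhs - rhs)))          # near zero (finite-difference error)
```

The finite-difference derivative is only a numerical stand-in for $d_t$, so the last check agrees up to $O(ds^2)$ rather than exactly.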
For this we take a look at the position-basis matrix elements of $U$:
$$K(t',q',t,q) = \bra {q'} \hat U(t',t) \ket{q}$$
> Sidenote: We could just as well choose any other basis of the space to build this $K$. The position basis is simply the easiest for our case.

We call this matrix element the _propagator kernel_. The name makes sense, because it is the integration kernel for the time evolution of a position-basis state. We can interpret it as follows:

- Assume we have a particle at $q$ at time $t$.
- The probability for this particle to end up at $q'$ at time $t'$
- is given by the magnitude squared of $K$.

Had we chosen another basis, it would be the transition probability between those other states.

We can split the transition $q \to q''$ by introducing an intermediate position $q'$ at an intermediate time $t'$, and including all intermediate places:
$$K(t'',q'',t,q) = \int dq'\, K(t'',q'',t',q')\,K(t',q',t,q)$$
The cool thing about the propagator kernel is that we now have a tool that time-evolves position eigenstates. Starting from
$$\psi(t,q) = \int dq'\, K(t,q,0,q')\, \psi(0,q')$$
we can see this by plugging in a delta function, $\psi(0,q') = \delta(q'-x_{0})$:
$$\psi(t,q) = K(t,q,0,x_{0})$$
(Note that this is a bit imprecise; Prof. Renner doesn't like treating "delta-function states", because a delta function is technically only a distribution.)

## Path integral formulation of QM

The idea of the path integral formulation is to construct the time evolution of $\psi$ via the concatenation of multiple $K$. From the formula above we already saw that we can "subdivide" the time evolution by including all possible intermediate positions. We will now take this idea to the extreme.

Consider a single slit (at $q_{1}$) in the middle of our path. Any valid path will have to pass through $q_{1}$. We can express this by saying that the total time evolution goes from $t_{i} \to t_{1}$, where we project onto the slit (i.e.
we remove any components not on the slit), and then continue the propagation from there:
$$W = U(t_{f},t_{1}) \ket{q_{1}}\bra{q_{1}} U(t_{1}, t_{i})$$
Using the propagator-kernel subdivision rule from before, we note that this means we can write the propagator kernel as the product of the propagator to the slit and the propagator away from the slit:
$$K_{tot} = K_{after}K_{before}$$
If there are multiple slits through which we can get to the other side of the screen, we simply add the corresponding contributions together:
$$K_{tot}= K_{after,1} K_{before,1} + K_{after,2} K_{before,2}$$
> Sketch

Here we realize how important it is to work with $K$, and not with $|K|^{2}$: only this way do we actually get proper interference!

## Measuring stuff

So far we only had one type of measurement, namely the projection onto the intermediate position $q_{1}$. Remember we had
$$K = \bra{q_{f}} W \ket{q_{i}} = \bra{q_{f}} U_{f\leftarrow 1} \ket{q_{1}}\bra{q_{1}} U_{1\leftarrow i} \ket{q_{i}}$$
We can now replace the projection at $t_{1}$ with any other operator $A$. We call this new structure the one-point correlation function. It is the first step towards modeling interactions and general measurements.

The easiest example is a "position-like" measurement, i.e. an $A$ that is diagonal in the position basis and can thus be well represented as a function of $x$. With this we can model any screen, not just slits.

## Propagating knowledge

We can have multiple screens with consecutive measurements; we call these multi-point correlation functions. There are some difficulties associated with them, especially if the operators don't commute (which is the interesting case).
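The subdivision rule and the slit factorisation can be verified directly in a finite-dimensional toy model, where basis states play the role of positions and the integral over $q'$ becomes a matrix product (my own assumptions: $\hbar = 1$, a random Hamiltonian, slit at basis state 3):

```python
import numpy as np

rng = np.random.default_rng(1)
hbar = 1.0
dim = 6

M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2                  # random Hermitian Hamiltonian

def U(t1, t0):
    """Propagator from t0 to t1, via eigendecomposition of H."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * (t1 - t0) / hbar)) @ V.conj().T

# Subdivision rule: K(t'',q'',t,q) = sum_{q'} K(t'',q'',t',q') K(t',q',t,q)
# In a finite basis the sum over q' is exactly a matrix product.
K_tot = U(2.0, 0.0)
K_comp = U(2.0, 1.0) @ U(1.0, 0.0)
print(np.allclose(K_tot, K_comp))         # the two agree

# Single slit at basis state |q1>: propagate, project, propagate
q1 = 3
P = np.zeros((dim, dim)); P[q1, q1] = 1.0
W = U(2.0, 1.0) @ P @ U(1.0, 0.0)
# ... which factorises as K_after * K_before:
K_slit = np.outer(U(2.0, 1.0)[:, q1], U(1.0, 0.0)[q1, :])
print(np.allclose(W, K_slit))             # the factorisation holds
```

Because the projector is rank one, the slit kernel is an outer product of "after" and "before" columns, which is exactly the $K_{after}K_{before}$ structure above.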